542 research outputs found

    Effect of butylphthalide in patients with vascular cognitive impairment

    Purpose: To study the effects of butylphthalide in patients with vascular cognitive impairment. Method: Sixty patients with vascular cognitive impairment were randomly divided into a control group and a butylphthalide (NBP) group (n = 30 each). The control group received blood pressure control, blood sugar control, and lipid-lowering therapies, while the NBP group received butylphthalide capsules (200 mg, thrice daily). Treatment in both groups lasted 14 days. Thereafter, the Hasegawa Dementia Scale (HDS), Mini-Mental State Examination (MMSE), Activities of Daily Living Scale (ADL), and the event-related potential P300 were used to evaluate the effects of butylphthalide treatment. Result: After 14 days of treatment, the HDS, MMSE, and ADL scores of the NBP group were significantly higher than those of the control group (p < 0.05). The P300 latency of the NBP group was shorter, and the P300 amplitude higher, than that of the control group (p < 0.05). Conclusion: Butylphthalide treatment yielded higher HDS, MMSE, and ADL scores and shorter P300 latency. These results provide good evidence of the effectiveness of butylphthalide therapy in the management of vascular cognitive impairment. However, further clinical trials are recommended prior to application in clinical practice.
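    The between-group comparisons reported above (p < 0.05) imply a standard two-sample significance test on the post-treatment scale scores. A minimal sketch of such a comparison follows; the abstract does not state which test was used, so the independent-samples t-test, the synthetic scores, and the group means here are purely illustrative assumptions.

```python
# Hypothetical illustration: compare post-treatment MMSE scores between groups.
# The specific statistical test used in the study is not stated in the abstract;
# an independent-samples t-test is assumed here purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mmse_control = rng.normal(loc=20.0, scale=3.0, size=30)  # placeholder scores, n = 30
mmse_nbp = rng.normal(loc=23.0, scale=3.0, size=30)      # placeholder scores, n = 30

t_stat, p_value = stats.ttest_ind(mmse_nbp, mmse_control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would indicate a significant difference
```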

    A GAN-based Tunable Image Compression System

    The method of importance maps has been widely adopted in DNN-based lossy image compression to allocate bits according to the importance of image content. However, insufficient allocation of bits to non-important regions often leads to severe distortion at low bpp (bits per pixel), which hampers the development of efficient content-weighted image compression systems. This paper rethinks content-based compression by using a Generative Adversarial Network (GAN) to reconstruct the non-important regions. Moreover, multiscale pyramid decomposition is applied to both the encoder and the discriminator to achieve global compression of high-resolution images. A tunable compression scheme is also proposed to compress an image to any specific compression ratio without retraining the model. Experimental results show that our proposed method improves MS-SSIM by more than 10.3% compared to a recently reported GAN-based method at the same low bpp (0.05) on the Kodak dataset.
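    As a rough illustration of the importance-map mechanism this abstract builds on, the sketch below shows an encoder that emits a latent code together with a single-channel importance map, which then controls how finely each spatial location is quantized. This is not the authors' architecture; all layer sizes, module names, and the toy quantizer are assumptions.

```python
# Minimal sketch of importance-map-guided bit allocation (illustrative only,
# not the paper's architecture). Assumes PyTorch; layer sizes are arbitrary.
import torch
import torch.nn as nn

class ImportanceEncoder(nn.Module):
    def __init__(self, latent_channels=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.to_latent = nn.Conv2d(64, latent_channels, kernel_size=3, padding=1)
        self.to_importance = nn.Sequential(
            nn.Conv2d(64, 1, kernel_size=3, padding=1), nn.Sigmoid()  # values in [0, 1]
        )

    def forward(self, x):
        h = self.backbone(x)
        latent = self.to_latent(h)
        importance = self.to_importance(h)  # one channel: per-location importance
        return latent, importance

def masked_quantize(latent, importance, max_levels=8):
    # More important regions keep more quantization levels (finer detail);
    # unimportant regions are coarsely quantized, leaving a GAN decoder to
    # reconstruct plausible content there.
    levels = torch.clamp((importance * max_levels).round(), min=1)
    return torch.round(latent * levels) / levels

x = torch.randn(1, 3, 64, 64)  # dummy image batch
encoder = ImportanceEncoder()
latent, imp = encoder(x)
q_latent = masked_quantize(latent, imp)
print(q_latent.shape, imp.shape)
```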

    Mismatched Training Data Enhancement for Automatic Recognition of Children’s Speech using DNN-HMM

    The increasing profusion of commercial automatic speech recognition technology applications has been driven by big-data techniques using high-quality labelled speech datasets. Children's speech has greater time- and frequency-domain variability than typical adult speech, lacks good large-scale training data, and presents difficulties relating to capture quality. Each of these factors reduces the performance of systems that automatically recognise children's speech. In this paper, children's speech recognition is investigated using a hybrid acoustic modelling approach based on deep neural networks and Gaussian mixture models with hidden Markov model back ends. We explore the incorporation of mismatched training data to achieve a better acoustic model and improve performance in the face of limited training data, as well as training data augmentation using noise. We also explore two arrangements for vocal tract length normalisation and a gender-based data selection technique suitable for training a children's speech recogniser.
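    The noise-based training data augmentation mentioned above is typically implemented by mixing clean speech with noise at a chosen signal-to-noise ratio. The sketch below shows that step assuming mono NumPy waveforms; it is not the authors' exact pipeline, and the function name and dummy signals are illustrative.

```python
# Illustrative noise augmentation at a target SNR (dB); not the paper's exact recipe.
import numpy as np

def add_noise_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix `noise` into `speech` so the result has roughly the requested SNR."""
    # Tile or trim the noise to match the speech length.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[:len(speech)]

    speech_power = np.mean(speech ** 2) + 1e-12
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale noise so that 10*log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Dummy signals standing in for real recordings (16 kHz mono assumed).
rng = np.random.default_rng(1)
clean = rng.standard_normal(16000).astype(np.float32)
babble = rng.standard_normal(8000).astype(np.float32)
augmented = add_noise_at_snr(clean, babble, snr_db=10.0)
```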

    Integration of Pre-trained Protein Language Models into Geometric Deep Learning Networks

    Geometric deep learning has recently achieved great success in non-Euclidean domains, and learning on the 3D structures of large biomolecules is emerging as a distinct research area. However, its efficacy is largely constrained by the limited quantity of structural data. Meanwhile, protein language models trained on large amounts of 1D sequence data have shown burgeoning capabilities with scale across a broad range of applications. Several previous studies consider combining these different protein modalities to promote the representation power of geometric neural networks, but fail to present a comprehensive understanding of their benefits. In this work, we integrate the knowledge learned by well-trained protein language models into several state-of-the-art geometric networks and evaluate them on a variety of protein representation learning benchmarks, including protein-protein interface prediction, model quality assessment, protein-protein rigid-body docking, and binding affinity prediction. Our findings show an overall improvement of 20% over baselines. Strong evidence indicates that incorporating protein language models' knowledge enhances geometric networks' capacity by a significant margin and can be generalized to complex tasks.
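    As a rough sketch of how precomputed per-residue language-model embeddings can be fused with a geometric network's node features, the module below projects both feature sets to a shared width and sums them. The dimensions, module name, and fusion-by-addition choice are assumptions for illustration, not the paper's method.

```python
# Minimal sketch of fusing per-residue protein language model (PLM) embeddings
# with a geometric network's node features; illustrative only, with all
# dimensions and module names assumed rather than taken from the paper.
import torch
import torch.nn as nn

class PLMFusionNode(nn.Module):
    def __init__(self, geom_dim=64, plm_dim=1280, hidden_dim=128):
        super().__init__()
        self.project_plm = nn.Linear(plm_dim, hidden_dim)    # compress PLM embedding
        self.project_geom = nn.Linear(geom_dim, hidden_dim)  # lift geometric features
        self.mix = nn.Sequential(nn.ReLU(), nn.Linear(hidden_dim, hidden_dim))

    def forward(self, geom_feats, plm_feats):
        # geom_feats: (num_residues, geom_dim) invariant features from the 3D structure
        # plm_feats:  (num_residues, plm_dim) precomputed sequence embeddings
        fused = self.project_geom(geom_feats) + self.project_plm(plm_feats)
        return self.mix(fused)  # per-residue features fed into the geometric network

num_residues = 150
geom_feats = torch.randn(num_residues, 64)    # placeholder structural features
plm_feats = torch.randn(num_residues, 1280)   # placeholder PLM embeddings
node_feats = PLMFusionNode()(geom_feats, plm_feats)
print(node_feats.shape)  # torch.Size([150, 128])
```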